
    An Improved Approximate Consensus Algorithm in the Presence of Mobile Faults

    This paper explores the problem of reaching approximate consensus in synchronous point-to-point networks, where each pair of nodes is able to communicate with each other directly and reliably. We consider the mobile Byzantine fault model proposed by Garay '94 -- in this model, an omniscient adversary can corrupt up to $f$ nodes in each round, and at the beginning of each round, faults may "move" in the system (i.e., different sets of nodes may become faulty in different rounds). Recent work by Bonomi et al. '16 proposed a simple iterative approximate consensus algorithm which requires at least $4f+1$ nodes. This paper proposes a novel technique of using "confession" (a mechanism that allows others to ignore past behavior) and a variant of reliable broadcast to improve the fault-tolerance level. In particular, we present an approximate consensus algorithm that requires only $\lceil 7f/2 \rceil + 1$ nodes, an $\lfloor f/2 \rfloor$ improvement over the state-of-the-art algorithms. Moreover, we also show that the proposed algorithm is optimal within a family of round-based algorithms.
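
    The core primitive in this family of algorithms is an iterative update in which each node discards extreme values before averaging. Below is a minimal Python sketch of that classic trimmed-mean round; it illustrates the building block such algorithms refine, not the paper's confession-based construction, and all names are illustrative.

```python
# One round of iterative approximate consensus via trimmed averaging.
# With at most f faulty senders, the f smallest and f largest received
# values are discarded, so every surviving value lies within the range
# of the correct nodes' values and the spread shrinks each round.

def trimmed_mean_round(received_values, f):
    vals = sorted(received_values)
    trimmed = vals[f:len(vals) - f]          # drop f lowest, f highest
    return sum(trimmed) / len(trimmed)

# Example: 8 nodes, f = 2; two Byzantine extremes cannot drag the
# update outside the interval spanned by the correct values.
print(trimmed_mean_round([0.0, 0.1, 0.4, 0.5, 0.5, 0.6, 99.0, -99.0], f=2))
```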

    Collaborative Computation in Self-Organizing Particle Systems

    Many forms of programmable matter have been proposed for various tasks. We use an abstract model of self-organizing particle systems for programmable matter which could be used for a variety of applications, including smart paint and coating materials for engineering or programmable cells for medical uses. Previous research using this model has focused on shape formation and other spatial configuration problems (e.g., coating and compression). In this work we study foundational computational tasks that exceed the capabilities of the constant-size memory of an individual particle, such as implementing a counter and matrix-vector multiplication. These tasks represent new ways to use these self-organizing systems, which, in conjunction with previous shape and configuration work, make the systems useful for a wider variety of tasks. They can also leverage the distributed and dynamic nature of the self-organizing system to be more efficient and adaptable than computation on traditional linear computing hardware. Finally, we demonstrate applications of similar types of computations with self-organizing systems to image processing, with implementations of image color transformation and edge detection algorithms.
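
    To make the counter idea concrete, here is a minimal sketch (not the paper's amoebot algorithm) of how agents with constant-size memory can jointly represent an unbounded count: each particle stores a single bit, and an increment propagates a carry from particle to particle, growing the chain when needed.

```python
# A distributed binary counter spread across constant-memory particles.
# The value is stored little-endian, one bit per particle.

class Particle:
    def __init__(self):
        self.bit = 0                  # constant-size local state: one bit

def increment(chain):
    """Add 1 to the value encoded along the chain."""
    for p in chain:
        if p.bit == 0:
            p.bit = 1
            return
        p.bit = 0                     # carry continues to the next particle
    new = Particle()                  # carry ran off the end: grow the chain
    new.bit = 1
    chain.append(new)

chain = [Particle()]
for _ in range(6):
    increment(chain)
print([p.bit for p in chain])         # [0, 1, 1], i.e., 6 in little-endian binary
```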

    Time-Efficient Read/Write Register in Crash-prone Asynchronous Message-Passing Systems

    The atomic register is certainly the most basic object of computing science. Its implementation on top of an n-process asynchronous message-passing system has received a lot of attention. It has been shown that t < n/2 (where t is the maximal number of processes that may crash) is a necessary and sufficient requirement to build an atomic register on top of a crash-prone asynchronous message-passing system. In this context, this paper revisits the notion of a fast implementation of an atomic register and presents a new time-efficient asynchronous algorithm. Its time-efficiency is measured according to two different underlying synchrony assumptions. Whichever assumption is made, a write operation always costs a round-trip delay, while a read operation costs a round-trip delay in favorable circumstances (intuitively, when it is not concurrent with a write). When designing this algorithm, the aim was to stay as close as possible to the spirit of the famous ABD algorithm (proposed by Attiya, Bar-Noy, and Dolev).
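
    For readers unfamiliar with ABD, the following single-process Python sketch shows the quorum pattern the paper builds on: a write is one round trip to a majority, while a classic ABD read takes two (a query plus a write-back). The paper's contribution is a read that can skip the write-back in favorable, write-free circumstances. Replica communication is simulated by direct calls, and all names are illustrative.

```python
class Replica:
    def __init__(self):
        self.ts, self.val = 0, None

    def update(self, ts, val):         # adopt the value if it is newer
        if ts > self.ts:
            self.ts, self.val = ts, val

def write(replicas, ts, val):
    # One "round trip": a real system proceeds once n - t replicas ack.
    for r in replicas:
        r.update(ts, val)

def read(replicas):
    # Phase 1: query a majority and pick the highest timestamp.
    ts, val = max((r.ts, r.val) for r in replicas)
    # Phase 2 (write-back): make a majority store it so that no later
    # read can return an older value; this is the extra round trip a
    # time-efficient read avoids when no write is concurrent.
    write(replicas, ts, val)
    return val

replicas = [Replica() for _ in range(5)]   # n = 5 tolerates t = 2 crashes
write(replicas, ts=1, val="x")
print(read(replicas))                      # "x"
```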

    2-Aminobenzothiazolium 2,4-dicarboxybenzoate monohydrate

    Cocrystallization of 2-aminobenzothiazole with benzene-1,2,4-tricarboxylic acid in a mixed solvent affords the title ternary cocrystal, C7H7N2S+·C9H5O6−·H2O, in which one of the carboxyl groups of the benzenetricarboxylic acid is deprotonated and the heterocyclic N atom of the 2-aminobenzothiazole is protonated. In the crystal, intermolecular N—H⋯O and O—H⋯O hydrogen-bonding interactions stabilize the packing.

    How Long It Takes for an Ordinary Node with an Ordinary ID to Output?

    In the context of distributed synchronous computing, processors perform in rounds, and the time-complexity of a distributed algorithm is classically defined as the number of rounds before all computing nodes have output. Hence, this complexity measure captures the running time of the slowest node(s). In this paper, we are interested in the running time of the ordinary nodes, to be compared with the running time of the slowest nodes. The node-averaged time-complexity of a distributed algorithm on a given instance is defined as the average, taken over every node of the instance, of the number of rounds before that node outputs. We compare the node-averaged time-complexity with the classical one in the standard LOCAL model for distributed network computing. We show that there can be an exponential gap between the node-averaged time-complexity and the classical time-complexity, as witnessed by, e.g., leader election. Our first main result is a positive one, stating that, in fact, the two time-complexities behave the same for a large class of problems on very sparse graphs. In particular, we show that, for LCL problems on cycles, the node-averaged time-complexity is of the same order of magnitude as the slowest-node time-complexity. In addition, in the LOCAL model, the time-complexity is computed as a worst case over all possible identity assignments to the nodes of the network. In this paper, we also investigate the ID-averaged time-complexity, where the number of rounds is averaged over all possible identity assignments. Our second main result is that the ID-averaged time-complexity is essentially the same as the expected time-complexity of randomized algorithms (where the expectation is taken over all possible random bits used by the nodes, and the number of rounds is measured for the worst-case identity assignment). Finally, we study the node-averaged ID-averaged time-complexity.
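
    In symbols, writing $T_A(G, v)$ for the round at which node $v$ outputs when algorithm $A$ runs on instance $G$ (the notation is ours, following the abstract's definitions), the two measures being compared are:

```latex
\[
  \overline{T}_A(G) \;=\; \frac{1}{|V(G)|} \sum_{v \in V(G)} T_A(G, v)
  \quad \text{(node-averaged)},
  \qquad
  T_A(G) \;=\; \max_{v \in V(G)} T_A(G, v)
  \quad \text{(classical)}.
\]
```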

    Verifying Policy Enforcers

    Policy enforcers are sophisticated runtime components that can prevent failures by enforcing the correct behavior of the software. While a single enforcer can easily be designed by focusing only on the behavior of the application that must be monitored, the effect of multiple enforcers that enforce different policies might be hard to predict. So far, mechanisms to resolve interferences between enforcers have been based on priority mechanisms and heuristics. Although these methods provide a way to take decisions when multiple enforcers try to affect the execution at the same time, they do not guarantee the absence of interference on the global behavior of the system. In this paper we present a verification strategy that can be exploited to discover interferences between sets of enforcers and thus safely identify, a priori, the enforcers that can co-exist at run-time. In our evaluation, we applied our verification method to several policy enforcers for Android and discovered some incompatibilities. To appear in the Proceedings of the 17th International Conference on Runtime Verification (RV), 2017.
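
    To illustrate the kind of interference at stake, here is a minimal Python sketch (the policies and event names are invented) of two enforcers that are each correct in isolation but disagree when composed, which is exactly what an a priori verification step should flag.

```python
# Enforcers modeled as event-trace rewriters; composition order matters.

def leak_enforcer(trace):
    """Policy 1: a paused app must release the camera."""
    out = []
    for e in trace:
        out.append(e)
        if e == "pause":
            out.append("release(camera)")
    return out

def recording_enforcer(trace):
    """Policy 2: never release the camera while recording."""
    out, recording = [], False
    for e in trace:
        if e == "start_recording":
            recording = True
        elif e == "stop_recording":
            recording = False
        if e == "release(camera)" and recording:
            continue                      # suppress the forbidden release
        out.append(e)
    return out

trace = ["acquire(camera)", "start_recording", "pause"]
# The two composition orders produce different global behaviors:
print(recording_enforcer(leak_enforcer(trace)))   # release suppressed
print(leak_enforcer(recording_enforcer(trace)))   # release inserted
```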

    Approximate Consensus in Highly Dynamic Networks: The Role of Averaging Algorithms

    In this paper, we investigate the approximate consensus problem in highly dynamic networks in which the topology may change continually and unpredictably. We prove that in both synchronous and partially synchronous systems, approximate consensus is solvable if and only if the communication graph in each round has a rooted spanning tree, i.e., there is a coordinator at each time. The striking point in this result is that the coordinator is not required to be unique and can change arbitrarily from round to round. Interestingly, the class of averaging algorithms, which are memoryless and require no process identifiers, entirely captures the solvability issue of approximate consensus in that the problem is solvable if and only if it can be solved using any averaging algorithm. Concerning the time complexity of averaging algorithms, we show that approximate consensus can be achieved with precision $\varepsilon$ in a coordinated network model in $O(n^{n+1} \log\frac{1}{\varepsilon})$ synchronous rounds, and in $O(\Delta n^{n\Delta+1} \log\frac{1}{\varepsilon})$ rounds when the maximum round delay for a message to be delivered is $\Delta$. While in general an upper bound on the time complexity of averaging algorithms has to be exponential, we investigate various network models in which this exponential bound in the number of nodes reduces to a polynomial bound. We apply our results to networked systems with a fixed topology and classical benign fault models, and deduce both known and new results for approximate consensus in these systems. In particular, we show that for solving approximate consensus, a complete network can tolerate up to $2n-3$ arbitrarily located link faults at every round, in contrast with the impossibility result established by Santoro and Widmayer (STACS '89) showing that exact consensus is not solvable with $n-1$ link faults per round originating from the same node.
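
    The averaging step itself is simple enough to state in a few lines. The following Python sketch (the graph encoding and names are ours) runs the memoryless update in which every node replaces its value with the average of the values it hears, including its own; the round graphs change arbitrarily but each contains a rooted spanning tree, so the values contract toward a common limit.

```python
def averaging_round(values, hears_from):
    """One synchronous round: each node averages over itself plus the
    nodes it hears from in this round's communication graph."""
    new = {}
    for v in values:
        nbrs = hears_from[v] | {v}        # a node always keeps its own value
        new[v] = sum(values[u] for u in nbrs) / len(nbrs)
    return new

values = {0: 0.0, 1: 1.0, 2: 0.5}
rounds = [
    {0: set(), 1: {0}, 2: {0}},           # round graph rooted at node 0
    {0: {2}, 1: {2}, 2: set()},           # rooted at node 2: the root may move
] * 20
for g in rounds:
    values = averaging_round(values, g)
print(values)                              # all values nearly equal
```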

    Testing Divergent Transition Systems


    How Many Cooks Spoil the Soup?

    In this work, we study the following basic question: "How much parallelism does a distributed task permit?" Our definition of parallelism (or symmetry) here is not in terms of speed, but in terms of the identical roles that processes have at the same time in the execution. We initiate this study in population protocols, a very simple model that not only allows for a straightforward definition of what a role is, but also captures the challenge of isolating the properties that are due to the protocol from those that are due to the adversary scheduler, who controls the interactions between the processes. We (i) give a partial characterization of the set of predicates on input assignments that can be stably computed with maximum symmetry, i.e., $\Theta(N_{\min})$, where $N_{\min}$ is the minimum multiplicity of a state in the initial configuration, and (ii) turn our attention to the remaining predicates and prove a strong impossibility result for the parity predicate: the inherent symmetry of any protocol that stably computes it is upper bounded by a constant that depends on the size of the protocol.
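
    To make the model concrete, here is a minimal population-protocol simulator in Python (the protocol and state names are a standard textbook-style construction, not taken from the paper). It stably computes the parity of the number of 1-inputs, and it does so by collapsing the initial tokens into a single surviving token, i.e., by destroying symmetry, which is the behavior the paper's impossibility result shows is unavoidable for parity.

```python
import random

def delta(p, q):
    """Transition rule applied to one pairwise interaction."""
    if p[0] == "T" and q[0] == "T":       # two tokens merge...
        b = (p[1] + q[1]) % 2             # ...accumulating parity mod 2
        return ("T", b), ("F", b)
    if p[0] == "T":
        return p, ("F", p[1])             # a follower adopts the token's bit
    if q[0] == "T":
        return ("F", q[1]), q
    return p, q                           # follower-follower: no change

inputs = [1, 1, 1, 0, 0, 0, 0, 1]         # parity of the 1-inputs is 1
agents = [("T", b) for b in inputs]       # every agent starts as a token
for _ in range(10_000):                   # the scheduler picks random pairs
    i, j = random.sample(range(len(agents)), 2)
    agents[i], agents[j] = delta(agents[i], agents[j])
print(set(agents))                        # stabilizes with output bit 1
```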